
    Phonetic recalibration does not depend on working memory

    Listeners use lipread information to adjust the phonetic boundary between two speech categories (phonetic recalibration; Bertelson et al., 2003). Here, we examined phonetic recalibration while listeners were engaged in a visuospatial or verbal working memory task under different memory load conditions. Phonetic recalibration was, like selective speech adaptation, not affected by a concurrent verbal or visuospatial memory task. This result indicates that phonetic recalibration is a low-level process that does not critically depend on the processes used in verbal or visuospatial working memory.

    Sound enhances visual perception: Cross-modal effects of auditory organization on vision.

    Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    In many natural audiovisual events (e.g., a clap of the hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported distinct neural correlates of temporal (when) versus phonetic/semantic (which) content in audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 in both spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40–60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
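
    To make the additive-model comparison (AV − V vs. A) concrete, below is a minimal Python sketch of the test in the N1 window. All arrays and names (erp_a, erp_v, erp_av, the 80–120 ms window) are illustrative placeholders, not the study's actual analysis pipeline.

```python
import numpy as np

# Illustrative additive-model test (AV - V vs. A) for the auditory N1.
# erp_a, erp_v and erp_av stand in for trial-averaged waveforms (microvolts)
# at a fronto-central electrode, sampled at 500 Hz from -100 to 500 ms.
fs = 500
t = np.arange(-0.1, 0.5, 1.0 / fs)

rng = np.random.default_rng(0)
erp_a = rng.normal(0.0, 0.1, t.size)   # placeholder waveforms
erp_v = rng.normal(0.0, 0.1, t.size)
erp_av = rng.normal(0.0, 0.1, t.size)

def mean_amp(erp, window):
    """Mean amplitude inside a latency window given in seconds."""
    mask = (t >= window[0]) & (t <= window[1])
    return erp[mask].mean()

n1_window = (0.08, 0.12)                  # typical N1 latency range
residual = erp_av - erp_v                 # subtract out visual activity
n1_residual = mean_amp(residual, n1_window)
n1_auditory = mean_amp(erp_a, n1_window)

# N1 is a negative deflection, so "AV - V < A" in amplitude terms means the
# residual is less negative than the auditory-only N1: suppression > 0 here.
suppression = n1_residual - n1_auditory
print(f"N1 suppression: {suppression:.2f} microvolts")
```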

    The crossed-hands deficit in temporal order judgments occurs for present, future, and past hand postures

    When people judge the temporal order (TOJ task) of two tactile stimuli at the two hands while their hands are crossed, performance is much worse than with uncrossed hands [1]. This crossed-hands deficit is widely considered to indicate interference of external spatial coordinates with body-centered coordinates in the localization of touch [2]. Similar deficits have also been observed when people are only about to move their hands towards a crossed position [3]-[5], suggesting a predictive update of external spatial coordinates. Here, we extend the investigation of the dynamics of external coordinates during hand movement. Participants performed a TOJ task while they executed an uncrossing or a crossing movement, and at the moment the TOJ stimuli were presented, the hands were crossed, uncrossed, or in between. Present, future, and past crossed-hands postures decreased performance in the TOJ task, suggesting that the update of external spatial coordinates of touch includes both predictive processes and processes that preserve the recent past. In addition, our data corroborate the flip model of crossed-hands deficits [1], and suggest that more pronounced deficits are accompanied by longer times needed to resolve the interference.
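
    For readers unfamiliar with how TOJ performance is quantified, the following is a hedged Python sketch of one standard analysis: fitting a cumulative Gaussian to the proportion of "right hand first" responses and deriving a just noticeable difference (JND). The data and parameter values are invented for illustration; the crossed-hands deficit would appear as a larger JND for crossed than for uncrossed postures.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Invented TOJ data: SOAs in ms (negative = left hand stimulated first) and
# the proportion of "right hand first" responses at each SOA.
soa = np.array([-200.0, -90.0, -30.0, 30.0, 90.0, 200.0])
p_right_first = np.array([0.05, 0.20, 0.40, 0.65, 0.85, 0.95])

def psychometric(x, pss, sigma):
    """Cumulative Gaussian; pss = point of subjective simultaneity,
    sigma indexes temporal resolution (larger = worse performance)."""
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(psychometric, soa, p_right_first, p0=(0.0, 80.0))

# A common JND definition: half the distance between the 25% and 75% points.
jnd = sigma * norm.ppf(0.75)
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")

# In a crossed-hands analysis, the same fit is run per posture; the deficit
# shows up as a substantially larger JND for crossed than uncrossed hands.
```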

    Horen met de ogen, zien met de oren (Hearing with the eyes, seeing with the ears)

    Increased sub-clinical levels of autistic traits are associated with reduced multisensory integration of audiovisual speech

    Recent studies suggest that sub-clinical levels of autistic symptoms may be related to reduced processing of artificial audiovisual stimuli. It is unclear whether these findings extend to more natural stimuli such as audiovisual speech. The current study examined the relationship between autistic traits, measured by the Autism-Spectrum Quotient, and audiovisual speech processing in a large non-clinical population, using a battery of experimental tasks assessing audiovisual perceptual binding, visual enhancement of speech embedded in noise, and audiovisual temporal processing. Several associations were found between autistic traits and audiovisual speech processing. Increased autistic-like imagination was related to reduced perceptual binding measured by the McGurk illusion. Increased overall autistic symptomatology was associated with reduced visual enhancement of speech intelligibility in noise. Participants reporting increased levels of rigid and restricted behaviour were more likely to bind audiovisual speech stimuli over longer temporal intervals, while an increased tendency to focus on local aspects of sensory inputs was related to a narrower temporal binding window. These findings demonstrate that increased levels of autistic traits may be related to alterations in audiovisual speech processing, and they are consistent with the notion of a spectrum of autistic traits that extends to the general population.
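
    The temporal binding window mentioned above is typically estimated from simultaneity judgments collected at a range of audiovisual asynchronies. Below is a minimal Python sketch of one common approach, fitting a scaled Gaussian to the proportion of "synchronous" responses and reading off its width at a fixed criterion; all numbers are hypothetical, and the exact model used in the study may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented simultaneity-judgment data: audiovisual SOAs in ms (negative =
# sound leading) and the proportion of "synchronous" responses at each SOA.
soa = np.array([-400.0, -300.0, -200.0, -100.0, 0.0,
                100.0, 200.0, 300.0, 400.0])
p_sync = np.array([0.10, 0.25, 0.55, 0.85, 0.95, 0.90, 0.70, 0.40, 0.15])

def gaussian(x, amp, mu, sigma):
    """Scaled Gaussian often used to model simultaneity judgments."""
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

(amp, mu, sigma), _ = curve_fit(gaussian, soa, p_sync, p0=(1.0, 0.0, 150.0))

# One way to operationalise the temporal binding window: the width of the
# fitted curve where it drops to 75% of its peak.
criterion = 0.75
half_width = abs(sigma) * np.sqrt(-2.0 * np.log(criterion))
print(f"Binding window: {2 * half_width:.0f} ms wide, centred at {mu:.0f} ms")
```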

    Suppression of the auditory N1 by visual anticipatory motion is modulated by temporal and identity predictability

    The amplitude of the auditory N1 component of the event-related potential (ERP) is typically suppressed when a sound is accompanied by visual anticipatory information that reliably predicts the timing and identity of the sound. While this visually induced suppression of the auditory N1 is considered an early electrophysiological marker of fulfilled prediction, it is not yet fully understood whether this internal predictive coding mechanism is primarily driven by the temporal characteristics or by the identity features of the anticipated sound. The current study examined the impact of temporal and identity predictability on suppression of the auditory N1 by visual anticipatory motion with an ecologically valid audiovisual event (a video of a handclap). Predictability of auditory timing and identity was manipulated in three different conditions in which sounds were either played in isolation or in conjunction with a video that reliably predicted the timing of the sound, the identity of the sound, or both the timing and identity. The results showed that N1 suppression was largest when the video reliably predicted both the timing and identity of the sound, and reduced when either the timing or identity of the sound was unpredictable. The current results indicate that predictions of timing and identity are both essential elements for predictive coding in audition.

    Predictive coding in autism spectrum disorder: Electrophysiological alterations in early auditory predictive processing as potential markers for autistic symptomatology

    Background: Autism spectrum disorder (ASD) is a pervasive neurodevelopmental disorder that has been linked to a range of perceptual processing alterations, including hypo- and hyperresponsiveness to auditory stimulation. A recently proposed theory that attempts to account for these symptoms suggests that autistic individuals have a decreased ability to anticipate upcoming sensory stimulation.

    Objectives: If the ability to anticipate upcoming sensory stimulation is indeed decreased in ASD, perception in ASD could be less affected by prior expectations and more driven by sensory input. Here, we tested this hypothesis with a series of event-related potential (ERP) studies in which we examined the neural correlates of motor-auditory prediction (N1 attenuation), visual-auditory prediction error (omission N1), and deviancy detection of auditory, visual, and audiovisual speech (MMN).

    Methods: In a series of ERP studies, we first compared the electrophysiological brain response to self- versus externally-initiated tones between a group of individuals with ASD and a group of age-matched individuals with typical development. Next, we assessed between-group differences in prediction error signaling by comparing ERPs evoked by unexpected auditory omissions in a sequence of audiovisual recordings of a handclap in which the visual motion reliably predicted the onset and content of the sound. Finally, we examined between-group differences in deviancy detection of auditory, visual, and audiovisual speech by applying an MMN paradigm.

    Results: The results of our first ERP study showed that, unlike in age-matched participants with typical development, self-initiation of tones through a button press did not attenuate the auditory N1 in autistic individuals, indicating that the ability to anticipate the auditory sensory consequences of self-initiated motor actions might be decreased in ASD (van Laarhoven, Stekelenburg, Eussen, & Vroomen, 2019, https://doi.org/10.1002/aur.2087). The results of our second study showed that unexpected omissions of a sound whose timing and content could be predicted by preceding visual anticipatory motion elicited an increased early auditory omission response (oN1) in the ASD group, indicating that violations of the prediction model produced larger prediction errors in autistic individuals than in their peers with typical development (van Laarhoven, Stekelenburg, Eussen, & Vroomen, 2020, https://doi.org/10.1177%2F1362361320926061). Finally, the results of our third study showed that deviancy detection of auditory speech is reduced in autistic individuals, while deviancy detection of visual speech and incongruent audiovisual speech seems to be intact (van Laarhoven et al., in prep).

    Conclusions: Taken together, our findings suggest that individuals with ASD may indeed experience difficulties in anticipating upcoming auditory stimulation. Importantly, these difficulties might be due to domain-specific alterations, rather than general impairments in predictive coding. This notion provides potential avenues for future research on electrophysiological markers of autistic symptomatology.
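
    As a rough illustration of the MMN measure used in the third study: the MMN is conventionally quantified as the deviant-minus-standard difference wave in a fronto-central latency window. The Python sketch below shows that computation on placeholder data; it is not the authors' actual pipeline, and all values are invented.

```python
import numpy as np

# Placeholder oddball data: epochs are trials x time arrays (microvolts) for
# standard and deviant stimuli, sampled at 500 Hz from -100 to 500 ms.
fs = 500
t = np.arange(-0.1, 0.5, 1.0 / fs)
rng = np.random.default_rng(1)
standard_epochs = rng.normal(0.0, 1.0, (200, t.size))
deviant_epochs = rng.normal(0.0, 1.0, (40, t.size))

# The MMN is the deviant-minus-standard difference wave, typically a negative
# deflection around 150-250 ms at fronto-central electrodes.
difference_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

mmn_mask = (t >= 0.15) & (t <= 0.25)
mmn_amplitude = difference_wave[mmn_mask].mean()
print(f"MMN mean amplitude (150-250 ms): {mmn_amplitude:.2f} microvolts")
```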